Recently, knowledge representation learning (KRL) is emerging as the state-of-the-art approach to process queries over knowledge graphs (KGs), wherein KG entities and the query are embedded into a latent space such that entities that answer the query are embedded close to the query. Yet, despite the intensive research on KRL, most existing studies either focus on homogeneous KGs or assume KG completion tasks (i.e., inference of missing facts), while answering complex logical queries over KGs with multiple aspects (multi-view KGs) remains an open challenge. To bridge this gap, in this paper, we present ROMA, a novel KRL framework for answering logical queries over multi-view KGs. Compared with prior work, ROMA departs in major aspects. (i) It models a multi-view KG as a set of overlaying sub-KGs, each corresponding to one view, which subsumes many types of KGs studied in the literature (e.g., temporal KGs). (ii) It supports complex logical queries with varying relation and view constraints (e.g., with complex topology and/or from multiple views). (iii) It scales up to KGs of large sizes (e.g., millions of facts) and fine-grained views (e.g., dozens of views). (iv) It generalizes to query structures and KG views that are unobserved during training. Extensive empirical evaluation on real-world KGs shows that ROMA significantly outperforms alternative methods.
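As a rough illustration of the query-embedding paradigm described above (a minimal sketch, not ROMA's actual architecture; the embedding tables here are random stand-ins for learned ones), answers to a query are retrieved by nearest-neighbor search in the shared latent space:

    import numpy as np

    rng = np.random.default_rng(0)
    dim, num_entities = 64, 1000

    # Hypothetical learned embeddings (random stand-ins for illustration).
    entity_emb = rng.normal(size=(num_entities, dim))
    query_emb = rng.normal(size=(dim,))  # embedding of a logical query

    # Rank entities by distance to the query embedding: entities that
    # answer the query should be embedded near it.
    dist = np.linalg.norm(entity_emb - query_emb, axis=1)
    top_answers = np.argsort(dist)[:10]
    print(top_answers)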
The fermionic neural network (FermiNet) is a recently proposed wavefunction ansatz used in variational Monte Carlo (VMC) methods to solve the many-electron Schrödinger equation. FermiNet proposes permutation-equivariant architectures, on top of which Slater determinants are applied to induce antisymmetry. FermiNet is proved to have universal approximation capability with a single determinant, namely, it suffices to represent any antisymmetric function given sufficient parameters. However, the asymptotic computational bottleneck comes from the Slater determinant, which scales as $O(N^3)$ for $N$ electrons. In this paper, we substitute the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and reduces the computational cost to $O(N^2)$. We formally prove that the pairwise construction built upon permutation-equivariant architectures can universally represent any antisymmetric function. Moreover, this universality can be achieved via continuous approximators when we aim to represent ground-state wavefunctions.
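To make the pairwise idea concrete, here is a minimal sketch (our illustration, not the paper's exact construction): a product over all electron pairs of a function g that is odd under argument swap is antisymmetric under any electron exchange, at $O(N^2)$ cost instead of the $O(N^3)$ determinant:

    import numpy as np

    def pairwise_antisym(h):
        """Antisymmetric scalar from per-electron features h of shape (N, d).

        psi = prod_{i<j} g(h_i, h_j) with g(a, b) = -g(b, a); here g is the
        illustrative odd function g(a, b) = sum(a - b). Swapping two
        electrons flips the sign of exactly one net factor, so psi flips sign.
        """
        n = h.shape[0]
        psi = 1.0
        for i in range(n):
            for j in range(i + 1, n):
                psi *= np.sum(h[i] - h[j])
        return psi

    h = np.random.default_rng(1).normal(size=(4, 3))
    h_swapped = h[[1, 0, 2, 3]]  # exchange electrons 0 and 1
    # Equal magnitude, opposite sign:
    print(pairwise_antisym(h), pairwise_antisym(h_swapped))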
The trade-off between robustness and accuracy has been widely studied in the adversarial literature. Although still controversial, the prevailing view is that this trade-off is inherent, either empirically or theoretically. Thus, we dig for the origin of this trade-off in adversarial training and find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance -- an overcorrection towards smoothness. Given this, we advocate employing local equivariance to describe the ideal behavior of a robust model, leading to a self-consistent robust error named SCORE. By definition, SCORE facilitates the reconciliation between robustness and accuracy, while still dealing with the worst-case uncertainty via robust optimization. By simply substituting KL divergence with variants of distance metrics, SCORE can be efficiently minimized. Empirically, our models achieve top-rank performance on RobustBench under AutoAttack. Moreover, SCORE provides instructive insights for explaining the overfitting phenomenon and semantic input gradients observed on robust models. The code is available at https://github.com/P2333/SCORE.
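Schematically, the substitution mentioned above might look as follows (a sketch under our assumptions, not the paper's exact objective): an invariance-style robust term pulls the adversarial prediction toward the clean prediction via KL divergence, whereas a SCORE-style variant measures a distance to the label instead:

    import numpy as np

    def kl_div(p, q, eps=1e-12):
        return np.sum(p * (np.log(p + eps) - np.log(q + eps)))

    # Softmax outputs on a clean input and its adversarial counterpart,
    # plus the one-hot label (illustrative numbers).
    p_clean = np.array([0.7, 0.2, 0.1])
    p_adv = np.array([0.5, 0.3, 0.2])
    y_onehot = np.array([1.0, 0.0, 0.0])

    # Invariance-style term: pull the adversarial output toward the clean one.
    loss_invariance = kl_div(p_clean, p_adv)

    # Distance-metric variant in the spirit of SCORE: pull the adversarial
    # output toward the label, avoiding over-smoothing toward p_clean.
    loss_score_like = np.linalg.norm(p_adv - y_onehot)

    print(loss_invariance, loss_score_like)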
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. Several methods have demonstrated impressive untargeted transferability; however, it remains challenging to efficiently produce targeted transferability. To this end, we develop a simple yet effective framework that applies a hierarchical generative network to craft targeted transfer-based adversarial examples. In particular, we contribute amortized designs that well adapt to multi-class targeted attacks. Extensive experiments on ImageNet show that our method improves the success rates of targeted black-box attacks by a significant margin over existing methods -- it reaches an average success rate of 29.1% against six diverse models based only on one substitute white-box model, substantially outperforming the state-of-the-art gradient-based attack methods. Moreover, the proposed method is also more efficient than gradient-based methods by an order of magnitude.
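A minimal sketch of the generator-based attack pattern described above (as opposed to per-example iterative optimization); the generator below is a hypothetical stand-in for the trained hierarchical network:

    import numpy as np

    def craft_targeted(x, target_class, generator, eps=16 / 255):
        """One forward pass of a (hypothetical) trained conditional generator
        yields a targeted perturbation; no per-example iterative optimization."""
        delta = generator(x, target_class)      # unconstrained perturbation
        delta = np.clip(delta, -eps, eps)       # enforce the L_inf budget
        return np.clip(x + delta, 0.0, 1.0)     # keep a valid image

    # Stand-in generator; in practice this is the trained network.
    rng = np.random.default_rng(2)
    fake_generator = lambda x, t: rng.normal(scale=0.05, size=x.shape)

    x = rng.uniform(size=(224, 224, 3))
    x_adv = craft_targeted(x, target_class=42, generator=fake_generator)
    print(np.abs(x_adv - x).max() <= 16 / 255)

The efficiency gain noted in the abstract comes from this amortization: crafting an example costs one forward pass rather than many gradient iterations.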
Correctly classifying adversarial examples is an essential but challenging requirement for safely deploying machine learning models. As reported on RobustBench, even the state-of-the-art adversarially trained models struggle to exceed 67% robust test accuracy on CIFAR-10, which is far from practical. A complementary way is to introduce a rejection option, allowing the model to return no prediction on uncertain inputs, where confidence is a commonly used certainty proxy. Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which can provably distinguish wrongly classified inputs from correctly classified ones. This intriguing property sheds light on using coupling strategies to better detect and reject adversarial examples. We evaluate our rectified rejection (RR) module on CIFAR-10, CIFAR-10-C, and CIFAR-100 under several attacks, including adaptive ones, and demonstrate that the RR module is compatible with different adversarial training frameworks for improving robustness, with little extra computation. The code is available at https://github.com/P2333/Rectified-Rejection.
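The coupled-rejection idea can be sketched as follows; the specific coupling form and thresholds below are illustrative assumptions, not the paper's exact definitions:

    import numpy as np

    def coupled_reject(probs, rectifier_score, t_con=0.5, t_rcon=0.5):
        """Accept only if both metrics pass. `probs` is the softmax output;
        `rectifier_score` in [0, 1] comes from an auxiliary head trained to
        estimate whether the top-1 prediction is correct (illustrative)."""
        confidence = probs.max()
        r_con = confidence * rectifier_score  # one simple coupling (assumption)
        accept = (confidence >= t_con) and (r_con >= t_rcon)
        return accept, confidence, r_con

    print(coupled_reject(np.array([0.80, 0.15, 0.05]), rectifier_score=0.9))
    print(coupled_reject(np.array([0.55, 0.40, 0.05]), rectifier_score=0.4))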
Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques.
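The kernel trick mentioned in the abstract admits a compact sketch (a simplified single-step, single-channel version; the Gaussian kernel size and sigma are illustrative choices): convolving the gradient with a pre-defined kernel approximates averaging gradients over an ensemble of translated images.

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_kernel(size=15, sigma=3.0):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return k / k.sum()

    def ti_fgsm_step(x, grad, eps=16 / 255):
        """Translation-invariant FGSM step: smooth the gradient with a
        pre-defined kernel, then perturb along its sign."""
        smoothed = convolve(grad, gaussian_kernel(), mode="nearest")
        return np.clip(x + eps * np.sign(smoothed), 0.0, 1.0)

    rng = np.random.default_rng(3)
    x = rng.uniform(size=(32, 32))     # single-channel toy image
    grad = rng.normal(size=(32, 32))   # stand-in for the loss gradient
    print(ti_fgsm_step(x, grad).shape)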
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's outputs activated by the clean image and the denoised image. Compared with ensemble adversarial training, which is the state-of-the-art defense method on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to either white-box or black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won the first place and outperformed other models by a large margin.
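The guiding loss can be sketched as below (a simplified rendering; in the paper the comparison is at a chosen high-level layer of the target model, stood in here by a generic feature function):

    import numpy as np

    def hgd_loss(features_fn, denoiser, x_clean, x_adv):
        """High-level representation guided loss: compare the target model's
        high-level activations on the clean image and on the denoised
        adversarial image, rather than comparing raw pixels."""
        f_clean = features_fn(x_clean)
        f_denoised = features_fn(denoiser(x_adv))
        return np.abs(f_clean - f_denoised).mean()  # L1 difference

    # Illustrative stand-ins for the feature extractor and the denoiser.
    features_fn = lambda x: np.tanh(x.mean(axis=(0, 1)))
    denoiser = lambda x: x - 0.01 * np.sign(x)

    rng = np.random.default_rng(4)
    x = rng.uniform(size=(32, 32, 3))
    x_adv = np.clip(x + rng.normal(scale=0.03, size=x.shape), 0, 1)
    print(hgd_loss(features_fn, denoiser, x, x_adv))

Training against this high-level difference, rather than a pixel-level one, is what suppresses the error amplification effect described above.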
Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
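The momentum update at the core of the method can be sketched as follows (a simplified single-channel version, with a toy gradient function standing in for the model's loss gradient): the velocity accumulates an L1-normalized gradient with decay factor mu, and the step follows the sign of the velocity.

    import numpy as np

    def mi_fgsm(x, grad_fn, eps=16 / 255, steps=10, mu=1.0):
        """Momentum iterative FGSM sketch: g <- mu * g + grad / ||grad||_1,
        then step along sign(g), staying inside the L_inf ball around x."""
        alpha = eps / steps
        g = np.zeros_like(x)
        x_adv = x.copy()
        for _ in range(steps):
            grad = grad_fn(x_adv)
            g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
            x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
            x_adv = np.clip(x_adv, 0.0, 1.0)
        return x_adv

    rng = np.random.default_rng(5)
    x = rng.uniform(size=(32, 32))
    grad_fn = lambda z: np.sin(10 * z)  # stand-in for the loss gradient
    print(np.abs(mi_fgsm(x, grad_fn) - x).max() <= 16 / 255)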
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
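In symbols, the weighted mean-percentile objective could plausibly be written as follows (the notation is ours, for illustration only: $\lambda$ weights the mean against the $\alpha$-percentile, and the infimum ranges over a Wasserstein ball of radius $\theta$ around the nominal reward distribution $\hat{P}$):

    \max_{\pi \in \Pi} \; \inf_{P \,:\, W(P, \hat{P}) \le \theta}
    \left\{ \lambda \, \mathbb{E}_{P}\!\left[ R^{\pi} \right]
    + (1 - \lambda) \, \mathrm{VaR}_{\alpha}^{P}\!\left[ R^{\pi} \right] \right\},
    \qquad \lambda \in [0, 1],

where $R^{\pi}$ denotes the random return under policy $\pi$; setting $\lambda = 1$ recovers a distributionally robust MDP and $\lambda = 0$ a percentile (chance-constrained-style) criterion, consistent with the special cases listed in the abstract.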
Forecasts by the European Centre for Medium-Range Weather Forecasts (ECMWF; EC for short) can provide a basis for the establishment of maritime-disaster warning systems, but they contain some systematic biases. The fifth-generation EC atmospheric reanalysis (ERA5) data have high accuracy, but are delayed by about 5 days. To overcome this issue, a spatiotemporal deep-learning method could be used for nonlinear mapping between EC and ERA5 data, which would improve the quality of EC wind forecast data in real time. In this study, we developed the Multi-Task-Double Encoder Trajectory Gated Recurrent Unit (MT-DETrajGRU) model, which uses an improved double-encoder forecaster architecture to model the spatiotemporal sequence of the U and V components of the wind field; we designed a multi-task learning loss function to correct wind speed and wind direction simultaneously using only one model. The study area was the western North Pacific (WNP), and real-time rolling bias corrections were made for 10-day wind-field forecasts released by the EC between December 2020 and November 2021, divided into four seasons. Compared with the original EC forecasts, after correction using the MT-DETrajGRU model the wind speed and wind direction biases in the four seasons were reduced by 8-11% and 9-14%, respectively. In addition, the proposed method modelled the data uniformly under different weather conditions. The correction performance under normal and typhoon conditions was comparable, indicating that the data-driven mode constructed here is robust and generalizable.
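The multi-task loss idea can be sketched as below (our illustrative rendering, not the paper's exact formulation): wind speed and direction are both derived from the predicted U and V components, and the direction term uses the wrapped angular difference so that errors across the 0/360-degree boundary are measured correctly.

    import numpy as np

    def multitask_wind_loss(u_pred, v_pred, u_true, v_true,
                            w_speed=1.0, w_dir=1.0):
        """Joint wind-speed / wind-direction loss from U and V components."""
        speed_pred = np.hypot(u_pred, v_pred)
        speed_true = np.hypot(u_true, v_true)
        dir_pred = np.arctan2(v_pred, u_pred)
        dir_true = np.arctan2(v_true, u_true)
        # Wrap the angular error into (-pi, pi] so that 359 deg vs 1 deg
        # counts as a 2 deg error, not 358 deg.
        dir_err = np.arctan2(np.sin(dir_pred - dir_true),
                             np.cos(dir_pred - dir_true))
        return (w_speed * np.mean((speed_pred - speed_true) ** 2)
                + w_dir * np.mean(dir_err ** 2))

    rng = np.random.default_rng(6)
    u, v = rng.normal(size=(2, 8, 8))
    print(multitask_wind_loss(u + 0.1, v - 0.1, u, v))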